A total of 8,652 results were found (search time: 15 ms).
101.
A new method for measuring the depth of the decarburized layer at the steel surface is proposed: electron probe microanalysis (EPMA) area mapping. Decarburization-depth measurements were performed on three typical steels: hot-rolled high-carbon tool steel 75Cr1, quenched gear steel 20CrMnTi, and cold-rolled, annealed low-carbon carriage-plate steel SH1100. The results show that, for pearlite–ferrite medium- and high-carbon steels, the depth measured by the metallographic method is about 77% of that measured by the EPMA area-mapping method; for quenched-and-tempered steels, the measurement precision of the EPMA method is better than 10 μm, about one tenth that of the microhardness method; and for low-carbon steels, whose decarburization depth is otherwise difficult to measure accurately, the EPMA method determines the depth from the average carbon-content profile over a relatively large area and gives accurate results. For decarburization-depth measurement, the EPMA area-mapping method has the advantages of being applicable to any steel grade, microstructural state and specimen shape, good statistical reliability, high measurement precision and ease of operation, and is well suited to wide adoption in the industry.
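The depth determination from an averaged carbon-content profile can be illustrated with a short sketch. The following Python snippet is a minimal illustration, not the authors' implementation: it assumes a hypothetical area-averaged carbon profile sampled along the depth direction and takes the decarburization depth as the first depth at which the carbon content recovers to a chosen fraction (here 95%, an assumed criterion) of the core carbon content.

```python
import numpy as np

def decarburization_depth(depth_um, carbon_wt, recovery_fraction=0.95):
    """Estimate decarburized-layer depth from an area-averaged carbon profile.

    depth_um          -- sampled depths from the surface, in micrometres
    carbon_wt         -- average carbon content (wt.%) at each depth
    recovery_fraction -- fraction of the core carbon content taken as the
                         boundary of the decarburized layer (assumed value)
    """
    core_carbon = np.mean(carbon_wt[-5:])          # core level from the deepest samples
    threshold = recovery_fraction * core_carbon
    recovered = np.nonzero(carbon_wt >= threshold)[0]
    return depth_um[recovered[0]] if recovered.size else None

# Hypothetical profile for illustration only (not measured data).
depth = np.linspace(0.0, 400.0, 81)               # 0-400 um, 5 um steps
carbon = 0.75 * (1.0 - np.exp(-depth / 80.0))     # rises toward ~0.75 wt.% in the core

print(f"estimated decarburization depth: {decarburization_depth(depth, carbon):.0f} um")
```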
102.
Modern manufacturing enterprises are shifting toward multi-variety, small-batch production. By optimizing scheduling, both transit and waiting times within the production process can be shortened. This study integrates the advantages of a digital twin and a supernetwork to develop an intelligent scheduling method that allows workshops to rapidly and efficiently generate process plans. By establishing a feature–process–machine-tool supernetwork model in the digital twin workshop, multiple data types can be managed in a centralized, classified manner. A feature similarity matrix is used to cluster data with similar attributes in the feature-layer subnetwork, enabling rapid correspondence of multi-source association information among features, processes and machine tools. Through similarity calculations on decomposed features and the mapping relationships of the supernetwork, production scheduling schemes can be formulated rapidly and efficiently. A virtual workshop is also used to simulate and optimize the scheduling scheme, realizing intelligent workshop scheduling. Finally, the efficiency of the proposed intelligent scheduling strategy is verified with a case study of an aeroengine gear production workshop.
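As an illustration of the feature-layer clustering step, the sketch below builds a feature similarity matrix and groups features whose similarity exceeds a threshold. The binary attribute vectors, the Jaccard similarity measure and the threshold value are all assumptions made for this example and are not taken from the paper.

```python
import numpy as np

def similarity_matrix(features):
    """Pairwise Jaccard similarity between binary attribute vectors."""
    f = np.asarray(features, dtype=bool)
    n = len(f)
    sim = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            inter = np.logical_and(f[i], f[j]).sum()
            union = np.logical_or(f[i], f[j]).sum()
            sim[i, j] = sim[j, i] = inter / union if union else 0.0
    return sim

def cluster_features(sim, threshold=0.5):
    """Greedy single-link clustering: a feature joins a cluster if it is similar
    enough to any member; otherwise it starts a new cluster."""
    clusters = []
    for i in range(len(sim)):
        for c in clusters:
            if any(sim[i, j] >= threshold for j in c):
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Hypothetical machining-feature attribute vectors (e.g. hole, slot, plane, thread ...).
features = [
    [1, 0, 1, 0, 0],   # feature 0
    [1, 0, 1, 1, 0],   # feature 1 -- similar to 0
    [0, 1, 0, 0, 1],   # feature 2
    [0, 1, 0, 1, 1],   # feature 3 -- similar to 2
]
print(cluster_features(similarity_matrix(features)))   # [[0, 1], [2, 3]]
```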
103.
The pixel-value-ordering (PVO) technique refers to the process of first ranking the pixels in a block and then modifying the maximum/minimum for reversible data hiding (RDH). This paper discusses PVO embedding in two-dimensional (2D) space and utilizes the prediction-error pair within a block for data embedding. We focus not only on the exploitation of conventional PVO embedding but also on its effective implementation in 2D form. PVO embedding is extended into a 2D form by integrating pairwise prediction-error expansion, and a reversible 2D mapping adapted to the special distribution of prediction-error pairs is proposed. Moreover, an adaptive mapping-selection mechanism is proposed to treat rough and smooth prediction-error pairs separately and further optimize the embedding performance. Experimental results show that the proposed method outperforms previous PVO-based methods.
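To make the underlying PVO idea concrete, the sketch below implements the conventional (one-dimensional) maximum-side PVO embedding on a single block; the paper's 2D pairwise mapping and adaptive selection mechanism are not reproduced here, and the block values are hypothetical.

```python
import numpy as np

def pvo_embed_max(block, bit):
    """Embed at most one bit into a block by modifying its maximum (conventional PVO).

    Returns (marked_block, bit_was_embedded).  Reversible because only the
    maximum pixel may increase, so the pixel ranking is preserved.
    """
    flat = block.flatten()
    order = np.argsort(flat, kind="stable")        # ascending pixel ranking
    i_max, i_2nd = order[-1], order[-2]
    e = int(flat[i_max]) - int(flat[i_2nd])        # prediction error of the maximum
    if e == 1:                                     # expandable: carries one data bit
        flat[i_max] += bit
        embedded = True
    elif e > 1:                                    # shifted: carries no data
        flat[i_max] += 1
        embedded = False
    else:                                          # e == 0: left unchanged
        embedded = False
    return flat.reshape(block.shape), embedded

def pvo_extract_max(marked):
    """Recover the original block and the embedded bit (if any)."""
    flat = marked.flatten()
    order = np.argsort(flat, kind="stable")
    i_max, i_2nd = order[-1], order[-2]
    e = int(flat[i_max]) - int(flat[i_2nd])
    bit = None
    if e == 1:
        bit = 0
    elif e == 2:
        bit, flat[i_max] = 1, flat[i_max] - 1
    elif e > 2:
        flat[i_max] -= 1
    return flat.reshape(marked.shape), bit

block = np.array([[162, 163], [161, 164]], dtype=np.int32)   # hypothetical 2x2 block
marked, ok = pvo_embed_max(block, bit=1)
restored, bit = pvo_extract_max(marked)
print(ok, bit, np.array_equal(restored, block))               # True 1 True
```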
104.
Rough set theory is a useful tool for dealing with imprecise knowledge. One of its advantages is that an unknown target concept can be approximately characterized by existing knowledge structures in a knowledge base. This paper explores such knowledge structures. Knowledge structures in a knowledge base are first described by means of set vectors, and the relationships between knowledge structures are divided into four classes. Then, properties of knowledge structures are discussed. Finally, group, lattice, mapping, and soft characterizations of knowledge structures are given.
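As a small, self-contained illustration of the approximation idea mentioned here (not the paper's knowledge-structure formalism), the sketch below computes the lower and upper approximations of a target set from the partition induced by an equivalence relation; the universe and partition are invented for the example.

```python
def approximations(partition, target):
    """Rough-set lower/upper approximation of `target` w.r.t. a partition of the universe."""
    target = set(target)
    lower, upper = set(), set()
    for block in partition:                 # each block = one equivalence class
        block = set(block)
        if block <= target:                 # entirely inside the target concept
            lower |= block
        if block & target:                  # intersects the target concept
            upper |= block
    return lower, upper

# Hypothetical universe U = {1..8} partitioned by some indiscernibility relation.
partition = [{1, 2}, {3, 4, 5}, {6}, {7, 8}]
target = {2, 3, 4, 5, 6}                    # unknown concept to be characterized

lower, upper = approximations(partition, target)
print("lower:", sorted(lower))              # [3, 4, 5, 6]
print("upper:", sorted(upper))              # [1, 2, 3, 4, 5, 6]
print("boundary:", sorted(upper - lower))   # [1, 2]
```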
105.
This paper introduces a synergic predator–prey optimization (SPPO) algorithm to solve the economic load dispatch (ELD) problem for thermal units with practical constraints. The basic PPO model comprises prey and predator as its essential components. SPPO uses collaborative decision-making for the movement and direction of the prey and maintains diversity in the swarm through the fear factor of the predator, which models the baffled state of the prey's mind. The decision-making of the prey is bifurcated into corroborative and impeded parts and comprises four behaviors: inertial, cognitive, collective swarm intelligence, and the prey's individual and neighborhood concern about the predator. Each prey particle memorizes its best and not-best positions as experience. In this work, opposition-based initialization is used to improve the quality of the prey swarm, which influences the convergence rate. To verify the robustness of the proposed algorithm, general benchmark problems and small, medium, and large power-generation test systems are simulated. These test systems exhibit non-linear behavior due to multi-fuel options and practical constraints. The prohibited-operating-zone and ramp-rate-limit constraints of the generators are handled using heuristics. The Newton–Raphson procedure is used to obtain the transmission losses through load-flow analysis. The outcomes of SPPO are compared with results reported in the literature and are found to be satisfactory.
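The opposition-based initialization mentioned above is a generic technique that can be sketched independently of the SPPO details. In the snippet below, the bounds, the population size and the quadratic fuel-cost function are hypothetical placeholders rather than the paper's ELD formulation: a random prey swarm is generated together with its opposite population, and the fitter half of the combined set is kept.

```python
import numpy as np

def opposition_based_init(pop_size, lower, upper, cost, rng=None):
    """Opposition-based population initialization.

    Generates a random population X and its opposite X_opp = lower + upper - X,
    then keeps the `pop_size` best individuals of the union according to `cost`.
    """
    rng = np.random.default_rng(rng)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x = rng.uniform(lower, upper, size=(pop_size, lower.size))
    x_opp = lower + upper - x                      # opposite points in the search box
    union = np.vstack([x, x_opp])
    fitness = np.array([cost(p) for p in union])
    best = np.argsort(fitness)[:pop_size]          # keep the fitter half
    return union[best]

# Hypothetical 3-unit dispatch toy problem: quadratic fuel-cost curves, no constraints.
a = np.array([0.008, 0.009, 0.007])
b = np.array([7.0, 6.3, 6.8])
c = np.array([200.0, 180.0, 140.0])
cost = lambda p: float(np.sum(a * p**2 + b * p + c))

swarm = opposition_based_init(pop_size=10, lower=[100, 50, 80],
                              upper=[500, 200, 300], cost=cost, rng=0)
print(swarm.shape)                                 # (10, 3)
```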
106.
Spider aciniform (wrapping) silk is a remarkable fibrillar biomaterial with outstanding mechanical properties. In Argiope trifasciata it is a modular protein whose core repetitive domain consists of 200-amino-acid units (W units). In solution, each W unit comprises a globular folded core with five α-helices and disordered tails that, in concatemers, link to form a ~63-residue intrinsically disordered linker. Herein, we present nuclear magnetic resonance (NMR) spectroscopy-based 15N spin relaxation analysis, allowing characterization of backbone dynamics as a function of residue on the ps–ns timescale in the context of the single W unit (W1) and the two-unit concatemer (W2). Unambiguous mapping of backbone dynamics throughout W2 was made possible by segmental NMR-active isotope enrichment through split-intein-mediated trans-splicing. Spectral density mapping for W1 and W2 reveals a striking disparity in dynamics between the folded core and the disordered linker and tail regions. These data are also consistent with rotational diffusion behaviour in which each globular domain tumbles almost independently of its neighbour. At a localized level, helix 5 exhibits elevated high-frequency dynamics relative to the proximal helix 4, supporting a model of fibrillogenesis in which this helix unfolds as part of the transition to a mixed α-helix/β-sheet fibre.
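As a numerical aside (not the authors' analysis), the standard Lipari–Szabo model-free spectral density can illustrate why flexible regions show elevated high-frequency dynamics: lowering the order parameter S² and lengthening the internal correlation time shifts spectral density toward high frequencies. All parameter values below are illustrative assumptions.

```python
import numpy as np

def j_model_free(omega, tau_c, s2, tau_e):
    """Lipari-Szabo model-free spectral density J(omega).

    tau_c -- overall rotational correlation time (s)
    s2    -- generalized order parameter (1 = rigid, 0 = fully flexible)
    tau_e -- effective internal correlation time (s)
    """
    tau = 1.0 / (1.0 / tau_c + 1.0 / tau_e)
    return 0.4 * (s2 * tau_c / (1.0 + (omega * tau_c) ** 2)
                  + (1.0 - s2) * tau / (1.0 + (omega * tau) ** 2))

omega_h = 2.0 * np.pi * 600e6          # 1H angular frequency at 14.1 T (illustrative)
rigid    = j_model_free(omega_h, tau_c=8e-9, s2=0.85, tau_e=50e-12)   # folded core
flexible = j_model_free(omega_h, tau_c=8e-9, s2=0.30, tau_e=1e-9)     # disordered linker

print(f"J(omega_H), rigid core:       {rigid:.3e} s/rad")
print(f"J(omega_H), flexible linker:  {flexible:.3e} s/rad")
```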
107.
In this paper, we present a new technique for displaying High Dynamic Range (HDR) images on Low Dynamic Range (LDR) displays efficiently on the GPU. The process has three stages. First, the input image is segmented into luminance zones. Second, the tone mapping operator (TMO) that performs best in each zone is automatically selected. Finally, the resulting tone mapping (TM) outputs for each zone are merged, generating the final LDR output image. To establish which TMO performs best in each luminance zone, we conducted a preliminary psychophysical experiment using a set of HDR images and six different TMOs. We validated our composite technique on several (new) HDR images and conducted a further psychophysical experiment using an HDR display as the reference, which establishes the advantages of our hybrid three-stage approach over a traditional individual TMO. Finally, we present a GPU version that is perceptually equal to the standard version but with much improved computational performance.
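The three-stage pipeline (segment by luminance, tone-map each zone, merge) can be sketched compactly on the CPU. The sketch below uses two placeholder operators (a global Reinhard curve and a simple gamma mapping) and a hard luminance threshold; these are illustrative assumptions, not the operators, zone segmentation or GPU implementation from the paper.

```python
import numpy as np

def luminance(img):
    """Rec. 709 luminance of a linear RGB image (H x W x 3)."""
    return 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]

def tm_reinhard(lum):
    return lum / (1.0 + lum)                    # simple global Reinhard curve

def tm_gamma(lum, gamma=2.2):
    lum = lum / (lum.max() + 1e-8)
    return lum ** (1.0 / gamma)

def zonal_tone_map(hdr, zone_threshold=1.0):
    """Segment by luminance, tone-map each zone separately, merge the results."""
    lum = luminance(hdr)
    dark = lum <= zone_threshold                # boolean zone masks
    ldr_lum = np.where(dark, tm_gamma(lum), tm_reinhard(lum))
    scale = ldr_lum / np.maximum(lum, 1e-8)     # preserve colour ratios
    return np.clip(hdr * scale[..., None], 0.0, 1.0)

# Hypothetical HDR test image: radiance ramp spanning several orders of magnitude.
h, w = 64, 64
ramp = np.logspace(-2, 2, w)                    # 0.01 .. 100 (illustrative units)
hdr = np.stack([np.tile(ramp, (h, 1))] * 3, axis=-1)
ldr = zonal_tone_map(hdr)
print(ldr.shape, float(ldr.min()), float(ldr.max()))
```

A hard mask is used here only for brevity; a practical merge step would blend zone boundaries to avoid visible seams.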
108.
The vast majority of practical engineering design problems require the simultaneous handling of several criteria. For the sake of simplicity, and through a priori preference articulation, many design tasks can be turned into single-objective problems that can be handled using conventional numerical optimization routines. However, in some situations, acquiring comprehensive knowledge about the system at hand, in particular about possible trade-offs between conflicting objectives, may be necessary. This calls for multi-objective optimization, which aims at identifying a set of alternative, Pareto-optimal designs. The most popular solution approaches include population-based metaheuristics. Unfortunately, such methods are not practical for problems involving expensive computational models. This is particularly the case in microwave and antenna engineering, where design reliability requires CPU-intensive electromagnetic (EM) analysis. In this work, we discuss methodologies for expedited multi-objective design optimization of expensive EM simulation models. The solution approaches presented here rely on the surrogate-based optimization (SBO) paradigm, in which the design speedup is obtained by shifting the optimization burden onto a cheap replacement model (the surrogate). The surrogate is used to generate the initial approximation of the Pareto front as well as for further front refinement (to elevate it to the level of the high-fidelity EM simulation model). We demonstrate several application case studies, including a wideband matching transformer, a dielectric resonator antenna and an ultra-wideband monopole antenna. The dimensionality of the design spaces in the considered examples varies from six to fifteen, and the design optimization cost is about one hundred high-fidelity EM simulations of the respective structure, which is extremely low given the problem complexity.
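To illustrate the surrogate-based optimization idea in miniature (not the EM-driven workflow or the authors' surrogate models), the sketch below treats a cheap analytic bi-objective function as the "expensive" model, fits radial-basis-function surrogates from a small set of samples, and scans weighted-sum scalarizations of the surrogates to obtain an approximate Pareto front, which is then re-evaluated with the "expensive" model. The test function, sample budget and kernel width are assumptions made for the example.

```python
import numpy as np

def fit_rbf(x, y, eps=1.0):
    """Fit a Gaussian RBF interpolant to samples (x, y); returns a predictor."""
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    phi = np.exp(-eps * d2)
    w = np.linalg.solve(phi + 1e-10 * np.eye(len(x)), y)
    def predict(q):
        q = np.atleast_2d(q)
        d2q = np.sum((q[:, None, :] - x[None, :, :]) ** 2, axis=-1)
        return np.exp(-eps * d2q) @ w
    return predict

# Stand-in "expensive" bi-objective model (Schaffer-like, one design variable).
f1 = lambda x: x[..., 0] ** 2
f2 = lambda x: (x[..., 0] - 2.0) ** 2

rng = np.random.default_rng(1)
x_train = rng.uniform(-1.0, 3.0, size=(15, 1))            # few "expensive" evaluations
s1 = fit_rbf(x_train, f1(x_train))                         # cheap surrogate of objective 1
s2 = fit_rbf(x_train, f2(x_train))                         # cheap surrogate of objective 2

candidates = np.linspace(-1.0, 3.0, 401).reshape(-1, 1)    # dense search on the surrogate
front = []
for w1 in np.linspace(0.0, 1.0, 11):                       # weighted-sum scalarizations
    score = w1 * s1(candidates) + (1.0 - w1) * s2(candidates)
    front.append(candidates[np.argmin(score)])
front = np.unique(np.vstack(front), axis=0)

# "High-fidelity" verification of the surrogate-predicted Pareto designs.
print(np.column_stack([front, f1(front), f2(front)]))
```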
109.
The age-group composition of populations varies considerably across the world, and obtaining accurate, spatially detailed estimates of the number of children under 5 years is important for designing vaccination strategies, educational planning and maternal healthcare delivery. Traditionally, such estimates are derived from population censuses, but in resource-poor settings these can be unreliable, outdated and of coarse resolution. Focusing on Nigeria, we use nationally representative household surveys and their cluster locations to predict the proportion of the under-five population at 1 × 1 km resolution using a Bayesian hierarchical spatio-temporal model. The results showed that land cover, travel time to major settlements, night-time lights and vegetation index were good predictors, and that accounting for fine-scale variation, rather than assuming a uniform proportion of under-5-year-olds, can result in significant differences in health metrics. The largest gaps in estimated bednet and vaccination coverage were in Kano, Katsina and Jigawa. Geolocated household surveys are a valuable resource for providing detailed, contemporary and regularly updated population age-structure data in the absence of recent census data. By combining them with covariate layers, age-structure maps of unprecedented detail can be produced to guide the targeting of interventions in resource-poor settings.
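A drastically simplified sketch of the modelling idea is given below: a binomial-logit regression of cluster-level under-five counts on covariates, fitted with a small random-walk Metropolis sampler on synthetic data. The full hierarchical spatio-temporal structure, the priors and the real survey data of the paper are not reproduced; the covariates, coefficients and sampler settings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "survey clusters": under-five counts out of sampled people, with two
# standardized covariates (e.g. night-time lights, travel time) plus an intercept.
n_clusters = 200
X = np.column_stack([np.ones(n_clusters), rng.normal(size=(n_clusters, 2))])
beta_true = np.array([-1.6, 0.3, -0.2])
p_true = 1.0 / (1.0 + np.exp(-X @ beta_true))
n = rng.integers(20, 60, size=n_clusters)                 # people sampled per cluster
k = rng.binomial(n, p_true)                               # under-fives observed

def log_post(beta):
    """Binomial-logit log-posterior with a weak N(0, 10^2) prior on each coefficient."""
    eta = X @ beta
    loglik = np.sum(k * eta - n * np.log1p(np.exp(eta)))
    logprior = -0.5 * np.sum((beta / 10.0) ** 2)
    return loglik + logprior

# Random-walk Metropolis sampler (illustrative, not tuned).
beta, lp = np.zeros(3), log_post(np.zeros(3))
samples = []
for it in range(20000):
    prop = beta + rng.normal(scale=0.05, size=3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    if it >= 5000:
        samples.append(beta)

print("posterior means:", np.array(samples).mean(axis=0))  # should be near beta_true
```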
110.
Advances in modern manufacturing techniques increase production efficiency but, at the same time, present new tasks and challenges for coordinate metrology and the manufacturers of Coordinate Measuring Machines (CMMs). The main goal of current research efforts is improving measurement accuracy. Since many of the possible solutions regarding CMM construction have already been explored, there seems to be little room left for improvement in that field. Further efforts at accuracy improvement therefore rely mostly on sophisticated mathematical algorithms designed to correct the relevant errors. Many types of error can be compensated using this approach, including probe-head errors, machine-dynamics errors and, most importantly, machine geometrical errors. Almost all coordinate measuring machines produced nowadays are equipped with a geometrical-error compensation matrix known as the CAA matrix (Computer Aided Accuracy). CAA matrices are based on a grid of reference points (nodes) at which the values of the components of the geometrical errors are determined experimentally. The error values between the nodes are estimated using simple interpolation methods. Theoretically, a higher density of reference points in the grid describing the CAA matrix should improve the accuracy of the machine using the matrix. On the other hand, increasing the number of nodes also increases the workload, time and money spent on constructing the CAA matrix. This paper presents a number of experiments aimed at creating CAA matrices with different numbers of matrix nodes using the LaserTracer system. The relations between the maximum permissible errors obtained on a machine using matrices with different node densities are also discussed. Additionally, the authors address the question of the optimal node density with regard to the trade-off between the time spent on matrix creation and the effect on accuracy.
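The interpolation between CAA reference nodes mentioned above can be illustrated with a plain trilinear scheme. In the sketch below, the grid spacing, the error values and the use of a single scalar error component are hypothetical; real CAA matrices store many error components per node, and a library routine such as scipy.interpolate.RegularGridInterpolator provides equivalent behaviour.

```python
import numpy as np

def trilinear(grid_axes, values, point):
    """Trilinear interpolation of a scalar error component on a regular 3-D node grid.

    grid_axes -- (xs, ys, zs): strictly increasing node coordinates along each axis
    values    -- array of shape (len(xs), len(ys), len(zs)) with the error at each node
    point     -- (x, y, z) query position inside the grid
    """
    idx, frac = [], []
    for axis, p in zip(grid_axes, point):
        i = int(np.clip(np.searchsorted(axis, p) - 1, 0, len(axis) - 2))
        idx.append(i)
        frac.append((p - axis[i]) / (axis[i + 1] - axis[i]))
    (i, j, k), (tx, ty, tz) = idx, frac
    c = values[i:i + 2, j:j + 2, k:k + 2]            # the 8 surrounding nodes
    c = c[0] * (1 - tx) + c[1] * tx                  # collapse along x
    c = c[0] * (1 - ty) + c[1] * ty                  # collapse along y
    return c[0] * (1 - tz) + c[1] * tz               # collapse along z

# Hypothetical 3-node-per-axis grid (mm) with made-up positioning errors (um).
xs = ys = zs = np.array([0.0, 250.0, 500.0])
err = np.fromfunction(lambda i, j, k: 0.5 * i + 0.2 * j - 0.1 * k, (3, 3, 3))

print(trilinear((xs, ys, zs), err, (125.0, 250.0, 400.0)))   # 0.29 for this linear field
```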